UK researchers

In [1]:
import pandas as pd
import holoviews as hv
import fastparquet as fp

from colorcet import fire
from datashader.bundling import directly_connect_edges, hammer_bundle

from holoviews.operation.datashader import datashade, dynspread
from holoviews.operation import decimate

from dask.distributed import Client
client = Client()

hv.notebook_extension('bokeh','matplotlib')

decimate.max_samples=20000
dynspread.threshold=0.01
datashade.cmap=fire[40:]
sz = dict(width=150,height=150)

%opts RGB [xaxis=None yaxis=None show_grid=False bgcolor="black"]

The files are stored in the efficient Parquet format:

In [2]:
r_nodes_file = '../data/calvert_uk_research2017_nodes.snappy.parq'
r_edges_file = '../data/calvert_uk_research2017_edges.snappy.parq'

r_nodes = hv.Points(fp.ParquetFile(r_nodes_file).to_pandas(index='id'), label="Nodes")
r_edges = hv.Curve( fp.ParquetFile(r_edges_file).to_pandas(index='id'), label="Edges")
len(r_nodes),len(r_edges)
Out[2]:
(15001, 593915)

We can render each collaboration as a single-line direct connection, but the result is a dense tangle:

In [3]:
%%opts RGB [tools=["hover"] width=400 height=400] 

%time r_direct = hv.Curve(directly_connect_edges(r_nodes.data, r_edges.data),label="Direct")

dynspread(datashade(r_nodes,cmap=["cyan"])) + \
datashade(r_direct)
CPU times: user 8.61 s, sys: 502 ms, total: 9.11 s
Wall time: 8.93 s
Out[3]:

Detailed substructure of this graph becomes visible after bundling edges using a variant of Hurter, Ersoy, & Telea (ECV-2012), which takes several minutes even using multiple cores with Dask:

In [4]:
%time r_bundled = hv.Curve(hammer_bundle(r_nodes.data, r_edges.data),label="Bundled")
CPU times: user 33.6 s, sys: 12.1 s, total: 45.7 s
Wall time: 3min 10s
In [5]:
%%opts RGB [tools=["hover"] width=400 height=400] 

dynspread(datashade(r_nodes,cmap=["cyan"])) + datashade(r_bundled)
Out[5]:

Zooming into these plots (in a live Python session) reveals interesting patterns, but one immediately wants to ask what the various groupings of nodes might represent. With a small number of nodes or a small number of categories, one could color-code the dots using datashader's categorical color coding support, but here we just have thousands of indistinguishable dots. Instead, let's use hover information so that the viewer can at least see the identity of each node on inspection.

To provide that hover information, we'll first need to pull in something useful to display, so let's load the name of each institution in the researcher list and merge it with our existing layout data:

In [6]:
node_names = pd.read_csv("../data/calvert_uk_research2017_nodes.csv", index_col="node_id", usecols=["node_id","name"])
node_names = node_names.rename(columns={"name": "Institution"})
node_names

r_nodes_named = pd.merge(r_nodes.data, node_names, left_index=True, right_index=True)
r_nodes_named.tail()
Out[6]:
x y Institution
33517 -8832.56100 1903.04940 The Asset Factor Limited
33519 -9448.65500 1292.72130 Ingenia Limited
33522 -1256.02720 2628.33400 United Therapeutics
33523 45.72761 -365.93396 Max Fordham LLP
33525 -8857.48100 1426.97060 First Greater Western Limited

We can now overlay a set of points on top of the datashaded edges, which will provide hover information for each node. Here, plotting the entire set of 15,001 nodes would be feasible, but to show how to work with larger datasets we wrap the hv.Points() call in decimate so that only a limited subset of the points is shown at any one time. If a node of interest is not visible at a particular zoom level, you can simply zoom in on that region; once the number of visible points falls below the specified decimate limit, the point you need will be revealed.

In [7]:
%%opts Points (color="cyan") [tools=["hover"] width=900 height=650] 
datashade(r_bundled, width=900, height=650) * \
decimate( hv.Points(r_nodes_named),max_samples=10000)
Out[7]:

If you click around and hover, you should see interesting groups of nodes, and can then set up further interactive tools using HoloViews' stream support to reveal aspects relevant to your research interests or questions.

As you can see, datashader lets you work with very large graph datasets. There are still a number of decisions to make by trial and error, computationally expensive operations like edge bundling require care, and interactive information is available for only a limited subset of the data at any one time, due to the data-size limitations of current web browsers.
